Binary Search owes its $O(\log_2(n))$ efficiency to a single idea: every comparison eliminates half of the remaining search space.
- To find the worst-case time complexity, we determine the maximum number of comparisons, $h$, needed to reduce the original input size $n$ to a single element.
- In the worst case, the algorithm runs for $h$ iterations until the search space contains at most one element. The size of the search space $S$ decreases geometrically:
- Initial Size: $|S_0| = n$
- After 1 comparison: $|S_1| \approx n/2$
- After $h$ comparisons: $|S_h| \approx n / 2^h$
- The process stops when the remaining size is approximately 1. $$ \frac{n}{2^h} \approx 1 \implies n \approx 2^h $$
- Taking the logarithm base 2 to solve for $h$: $$ h \approx \log_2(n) $$
- Key Takeaway: Since the work done in each iteration (calculating the pivot and performing one comparison) takes constant time, the total running time $f(n)$ is proportional to the number of steps, $h$ (checked empirically in the sketch below). Therefore, the worst-case time complexity is: $$ f(n) = O(\log_2(n)) $$
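To make the derivation concrete, here is a minimal Python sketch (an assumed implementation, since the text gives none) that runs binary search on a sorted list while counting comparisons, so the count can be checked against $\lceil \log_2(n) \rceil$:

```python
import math

def binary_search(arr, target):
    """Search a sorted list; return (index, comparisons), with index -1 on a miss."""
    lo, hi = 0, len(arr) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2   # pivot: constant-time arithmetic
        comparisons += 1       # one (three-way) comparison per iteration
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1       # discard the left half, including mid
        else:
            hi = mid - 1       # discard the right half, including mid
    return -1, comparisons
```

Each loop iteration shrinks the window `hi - lo + 1` to roughly half its previous size, matching the $|S_h| \approx n / 2^h$ sequence above.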
**Complexity Breakdown**
| Component | Description | Cost |
|---|---|---|
| Worst-Case Steps ($h$) | Number of comparisons to reduce $n$ to 1. | $\approx \log_2(n)$ |
| Work per Step | Midpoint calculation and one comparison. | $O(1)$ |
| Total Complexity | Steps × Work per Step. | $O(\log_2(n))$ |
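As a quick empirical check of the table (a hypothetical example reusing the `binary_search` sketch above), a worst-case miss in a sorted array of one million elements takes $\lceil \log_2(10^6) \rceil = 20$ comparisons:

```python
# Worst case for n = 1,000,000: search for a value absent from the array,
# which forces the search space all the way down to size 0.
arr = list(range(1_000_000))
_, comps = binary_search(arr, -1)  # -1 is smaller than every element
print(comps, math.ceil(math.log2(len(arr))))  # prints: 20 20
```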